
    Assessment of learning tomography using Mie theory

    In optical diffraction tomography, the multiply scattered field is a nonlinear function of the refractive index (RI) of the object. The Rytov method, a linear approximation of the forward model, is commonly used to reconstruct images. Recently, we introduced a reconstruction method based on the beam propagation method (BPM) that takes the nonlinearity into account; we refer to it as Learning Tomography (LT). In this paper, we carry out simulations to assess the performance of LT against the linear iterative method. Each algorithm is rigorously assessed on spherical objects, with synthetic data generated using Mie theory. By varying the RI contrast and the size of the objects, we show that the LT reconstruction is more accurate and robust than the reconstruction based on the linear model. In addition, we show that LT corrects distortions that appear in the Rytov approximation due to limitations in phase unwrapping. More importantly, the capacity of LT to handle multiple scattering is demonstrated by simulations of multiple cylinders using Mie theory and confirmed by experimental results on two spheres.
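The forward model underlying LT can be illustrated with a minimal split-step beam propagation method. This is a hedged sketch, not the authors' implementation: 1-D transverse grid, paraxial approximation, unit-amplitude plane-wave illumination, and hypothetical parameter names (`n_slices`, `dx`, `dz`).

```python
import numpy as np

def bpm_propagate(n_slices, dx, dz, wavelength, n0=1.33):
    """Split-step BPM: for each slice of the object, apply diffraction
    in Fourier space, then a phase screen for the local index contrast."""
    nx = n_slices.shape[1]
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    # paraxial free-space propagation kernel for one step dz in medium n0
    kernel = np.exp(-1j * kx**2 * dz / (2 * k0 * n0))
    field = np.ones(nx, dtype=complex)           # incident plane wave
    for slice_n in n_slices:                     # march through the object
        field = np.fft.ifft(np.fft.fft(field) * kernel)
        field *= np.exp(1j * k0 * (slice_n - n0) * dz)  # index phase screen
    return field
```

A homogeneous object (every slice equal to the background index) leaves the plane wave unchanged, which is a quick sanity check on the kernel.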

    Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning

    Limited-angle tomography of strongly scattering, quasi-transparent objects is a challenging, highly ill-posed problem with practical implications in medical and biological imaging, manufacturing, automation, and environmental and food security. Regularizing priors are necessary to reduce artifacts by improving the conditioning of such problems. Recently, it was shown that one effective way to learn the priors for strongly scattering yet highly structured 3D objects, e.g. layered and Manhattan, is with a static neural network [Goy et al, Proc. Natl. Acad. Sci. 116, 19848-19856 (2019)]. Here, we present a radically different approach in which the collection of raw images from multiple angles is viewed as analogous to a dynamical system driven by the object-dependent forward scattering operator. The sequence index in angle of illumination plays the role of discrete time in the dynamical-system analogy. The imaging problem thus turns into a problem of nonlinear system identification, which also suggests dynamical learning as a better fit for regularizing the reconstructions. We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the fundamental building block. Through comprehensive comparison of several quantitative metrics, we show that the dynamic method improves upon previous static approaches, with fewer artifacts and better overall reconstruction fidelity. Comment: 12 pages, 7 figures, 2 tables
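The core idea of treating the angle index as discrete time can be sketched with a standard GRU cell folding one measured frame per illumination angle into a hidden state. This is an illustrative assumption-laden toy: it uses a plain dense GRU (the paper's SC-GRU replaces the dense products with split convolutions, which is not reproduced here), random untrained weights, and invented names (`run_over_angles`, `dim_h`).

```python
import numpy as np

def gru_step(x, h, W, U, b):
    """One standard GRU update (dense, not the paper's split-convolutional variant)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(W['z'] @ x + U['z'] @ h + b['z'])        # update gate
    r = sigmoid(W['r'] @ x + U['r'] @ h + b['r'])        # reset gate
    h_tilde = np.tanh(W['h'] @ x + U['h'] @ (r * h) + b['h'])
    return (1 - z) * h + z * h_tilde

def run_over_angles(frames, dim_h, seed=0):
    """Treat the angle index of the raw images as discrete time and fold
    each frame into a hidden state that summarizes the object."""
    rng = np.random.default_rng(seed)
    dim_x = frames.shape[1]
    W = {k: 0.1 * rng.standard_normal((dim_h, dim_x)) for k in 'zrh'}
    U = {k: 0.1 * rng.standard_normal((dim_h, dim_h)) for k in 'zrh'}
    b = {k: np.zeros(dim_h) for k in 'zrh'}
    h = np.zeros(dim_h)
    for x in frames:              # one measured frame per illumination angle
        h = gru_step(x, h, W, U, b)
    return h
```

In the actual architecture the final hidden state would feed a decoder that emits the 3D reconstruction; here only the recurrence over angles is shown.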

    Powerset-Like Monads Weakly Distribute over Themselves in Toposes and Compact Hausdorff Spaces

    The powerset monad on the category of sets does not distribute over itself. Nevertheless, a weaker form of distributive law of the powerset monad over itself exists, and it essentially stems from the canonical Egli-Milner extension of the powerset to the category of relations. On the other hand, any regular category yields a category of relations, and some regular categories also possess a powerset-like monad, such as the Vietoris monad on compact Hausdorff spaces. We derive the Egli-Milner extension in three different frameworks: sets, toposes, and compact Hausdorff spaces. We prove that it corresponds to a monotone weak distributive law in each case by showing that the multiplication extends to relations but the unit does not. We provide an application to coalgebraic determinization of alternating automata.
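For finite sets, the Egli-Milner extension of a relation to subsets is easy to state concretely: S is related to T when every element of S relates forward into T and every element of T is reached from S. A small sketch (relations as sets of pairs; this illustrates only the set-based case, not the topos or Vietoris variants):

```python
def egli_milner(rel):
    """Egli-Milner extension of a relation R ⊆ A×B to finite subsets:
    S R̂ T iff every a in S relates to some b in T, and every b in T
    is related to by some a in S. `rel` is a set of (a, b) pairs."""
    def lifted(S, T):
        forth = all(any((a, b) in rel for b in T) for a in S)
        back = all(any((a, b) in rel for a in S) for b in T)
        return forth and back
    return lifted
```

For example, with `rel = {(1, 'x'), (2, 'y')}`, the pair `({1, 2}, {'x', 'y'})` is in the lifted relation, but `({1}, {'x', 'y'})` is not, since nothing in `{1}` relates to `'y'`.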

    A Learning Approach to Optical Tomography

    We describe a method for imaging 3D objects in a tomographic configuration, implemented by training an artificial neural network to reproduce the complex amplitude of the experimentally measured scattered light. The network is designed such that the voxel values of the refractive index of the 3D object are the variables adapted during the training process. We demonstrate the method experimentally by forming images of the 3D refractive index distribution of cells.
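The idea of making the object voxels themselves the trainable variables can be sketched with gradient descent on a measurement misfit. This is a deliberately simplified stand-in: a toy linear forward operator `A` replaces the optical scattering model, and the names (`fit_object`, `lr`) are invented for illustration.

```python
import numpy as np

def fit_object(A, measured, n_iter=500, lr=0.1):
    """Recover voxel values x from measurements y ≈ A x by treating x as
    the trainable variables and descending the squared error — the same
    idea as training a network whose "weights" are the object itself."""
    x = np.zeros(A.shape[1])
    m = len(measured)
    for _ in range(n_iter):
        residual = A @ x - measured
        x -= lr * (A.T @ residual) / m   # gradient of 0.5*||Ax - y||^2 / m
    return x
```

In the paper's setting the forward operator is the nonlinear scattered-field computation and the gradients flow through it by backpropagation; the linear `A` here only shows the optimization structure.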

    Low Photon Count Phase Retrieval Using Deep Learning

    Imaging systems' performance at low light intensity is affected by shot noise, which becomes increasingly strong as the power of the light source decreases. In this paper we experimentally demonstrate the use of deep neural networks to recover objects illuminated with weak light, and we demonstrate better performance than the classical Gerchberg-Saxton phase retrieval algorithm at equivalent signal-to-noise ratio. Prior knowledge about the object is implicitly contained in the training data set, and feature detection is possible at a signal-to-noise ratio close to one. We apply this principle to a phase retrieval problem and show successful recovery of the object's most salient features with as little as one photon per detector pixel on average in the illumination beam. We also show that the phase reconstruction is significantly improved by training the neural network with an initial estimate of the object, as opposed to training it with the raw intensity measurement. Comment: 8 pages, 5 figures
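The Gerchberg-Saxton baseline mentioned above alternates between two planes, keeping the current phase estimate and replacing the magnitude with the measured one in each plane. A minimal 1-D sketch (plain FFT as the propagator, random initial phase; the experimental geometry and noise model are not reproduced):

```python
import numpy as np

def gerchberg_saxton(target_mag, source_mag, n_iter=100, seed=0):
    """Classical Gerchberg-Saxton phase retrieval: alternate between the
    source plane and the Fourier plane, each time keeping the phase and
    enforcing the measured magnitude."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, source_mag.shape)
    field = source_mag * np.exp(1j * phase)
    for _ in range(n_iter):
        far = np.fft.fft(field)
        far = target_mag * np.exp(1j * np.angle(far))      # enforce far-field magnitude
        field = np.fft.ifft(far)
        field = source_mag * np.exp(1j * np.angle(field))  # enforce source magnitude
    return field
```

At very low photon counts the measured magnitudes are dominated by shot noise, which is exactly the regime where this magnitude-substitution step degrades and the learned prior of a trained network helps.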

    Too far, too small, too dark, too foggy: On the use of machine learning for computational imaging problems

    Computational imaging system design involves the joint optimization of hardware and software to deliver high-fidelity images to a human user or artificial intelligence (AI) algorithm. For example, in medical tomography, CAT scanners non-invasively produce cross-sectional images of the patient's organs, and then medical professionals or, increasingly, automated recognition systems perform diagnosis and decide upon a course of treatment. We refer to this operation of AI as image interpretation. This talk is about a different paradigm where machine learning (ML) is used at the step of image formation itself, i.e. for image reconstruction rather than interpretation. The ML algorithm, typically implemented as a deep neural network (DNN), is programmed using physically generated or rigorously simulated examples of objects and their associated signals produced on the sensor (or camera). The training phase consists of adjusting the connection weights of the DNN until it becomes possible, given the sensor signal from a hitherto unseen object, for the DNN to yield an accurate estimate of the object's spatial structure. The ML approach to solving inverse problems in such fashion has its roots in optimization methods employed long before in computational imaging, compressed sensing and dictionaries in particular. By replacing the proximal gradient step of the optimization with a DNN [K. Gregor & Y. LeCun, ICML 2010], it becomes possible to learn priors other than sparsity, and restrict the object class almost arbitrarily to facilitate the solution of "hard" inverse problems, e.g. highly ill-posed and highly noisy at the same time. Moreover, execution becomes very fast because pre-trained DNNs mostly consist of forward computations which can easily be run in real time, whereas traditional compressed sensing optimization routines are generally iterative.
DNN training is time consuming too, but it is only run up front while developing the algorithm; it is not a burden during operation. Unfortunately, however, with the DNN approach some of the nice properties of compressed sensing are lost, most notably convexity. In this talk we will review these basic developments and then discuss in detail their application to the specific problem of phase retrieval in lensless (free-space propagation) or defocused imaging systems. More specifically, we will investigate the impact of the power spectral density of the training example database on the quality of the reconstructions. We will review a sequence of papers where we first ignored this problem [A. Sinha et al, Optica 4:1117, 2017], then improved it in an ad hoc way by pre-modulation of the training examples [Li Shuai et al, Opt. Express 26:29340, 2018], and finally devised a dual-band approach where the signal is first separated into its low- and high-frequency components, their respective reconstructions are obtained by two DNNs trained separately, and the results are then re-composed by a third "synthesizer" DNN [Deng Mo et al, arXiv:1811.07945]. We will explain why each new attempt improves resolution and overall fidelity through progressively more balanced treatment of the spatial frequency spectrum. We will also discuss implications of this method for phase retrieval under extremely low-photon (too dark) conditions [A. Goy et al, Phys. Rev. Lett. 121:243902, 2018], as well as other related inverse problems, e.g. super-resolution (too far or too small) and imaging through diffusers (too foggy).
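The proximal-gradient step that learned approaches replace can be seen in plain ISTA for a sparsity-regularized inverse problem. A minimal sketch (toy dense operator, soft-threshold prox; in LISTA-style methods [Gregor & LeCun] the threshold step below is swapped for a trained network layer, which is what lets the prior go beyond sparsity):

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Each iteration: gradient step on the data term, then the proximal
    operator of the l1 penalty (soft thresholding)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold prox
    return x
```

With a fixed sparsity prior this routine is convex and iterative; unrolling a fixed number of such iterations and training the prox replaces convexity with speed and a learned, class-specific prior, which is the trade-off described above.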